
    Static 3D Triangle Mesh Compression Overview

    3D triangle meshes are widely used to model discrete surfaces, and are almost always represented with two tables: one for geometry and another for connectivity. While the raw size of a triangle mesh is around 200 bits per vertex, by cleverly (and separately) coding those two distinct kinds of information it is possible to achieve compression ratios of 15:1 or more. Different techniques must be used depending on whether a single-rate or a progressive bitstream is sought and, in the latter case, on whether or not hierarchically nested meshes are desirable during reconstruction.
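    As a hedged illustration of the two-table representation mentioned above (the array names, precisions, and the tiny tetrahedron are assumptions, not taken from the paper), the sketch below stores geometry and connectivity separately and estimates the raw cost per vertex:

```python
import numpy as np

# Minimal sketch of a two-table triangle mesh (illustrative only).
# Geometry table: one 3D position per vertex, stored as 32-bit floats.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float32)

# Connectivity table: three vertex indices per triangle, stored as 32-bit ints.
triangles = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=np.int32)

# Rough raw cost per vertex: 3 coordinates x 32 bits of geometry, plus the
# connectivity indices amortized over the vertices. The exact figure depends
# on coordinate/index precision and on the triangle-to-vertex ratio.
geometry_bits = vertices.size * 32 / len(vertices)
connectivity_bits = triangles.size * 32 / len(vertices)
print(f"raw cost ~ {geometry_bits + connectivity_bits:.0f} bits per vertex for this mesh")
```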

    Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes

    We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and its texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether: they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
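    The paper's exact refinement-image construction is not reproduced here; as a loose, hedged analogy, the sketch below builds texture LODs from the finest image and reconstructs finer levels from the coarsest one plus residual (refinement) images, in the spirit of a Laplacian pyramid. All function names, the averaging filter, and the nearest-neighbour predictor are assumptions.

```python
import numpy as np

def build_lods(finest, levels):
    """Create coarser texture LODs from the finest one by 2x2 averaging."""
    lods = [finest.astype(np.float64)]
    for _ in range(levels - 1):
        f = lods[-1]
        lods.append(f.reshape(f.shape[0] // 2, 2, f.shape[1] // 2, 2).mean(axis=(1, 3)))
    return lods  # lods[0] = finest ... lods[-1] = coarsest

def upsample(img):
    """Nearest-neighbour 2x upsampling, standing in for the decoder's predictor."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def encode(finest, levels):
    lods = build_lods(finest, levels)
    # Refinement image at each level = true LOD minus prediction from the level below.
    refinements = [lods[i] - upsample(lods[i + 1]) for i in range(levels - 2, -1, -1)]
    return lods[-1], refinements

def decode(coarsest, refinements, upto):
    """Reconstruct progressively; apply only as many refinement images as needed."""
    img = coarsest
    for r in refinements[:upto]:
        img = upsample(img) + r
    return img

tex = np.random.rand(64, 64)              # hypothetical single-channel texture
coarsest, refs = encode(tex, levels=4)
assert np.allclose(decode(coarsest, refs, upto=3), tex)
```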

    ITEM: Inter-Texture Error Measurement for 3D Meshes

    We introduce a simple and innovative method to compare any two texture maps, regardless of their sizes, aspect ratios, or even masks, as long as they are both meant to be mapped onto the same 3D mesh. Our system is based on a zero-distortion 3D mesh unwrapping technique that compares two new adapted texture atlases with the same mask but different texel colors, in which every texel covers the same area in 3D. Once these adapted atlases are created, we measure their difference with ITEM-RMSE, a slightly modified version of the standard RMSE defined for images. ITEM-RMSE is more meaningful and reliable than RMSE because it only takes into account the texels inside the mask, since they are the only ones that will actually be used during rendering. Our method is very useful not only for comparing the space efficiency of different texture atlas generation algorithms, but also for quantifying texture loss in compression schemes for multi-resolution textured 3D meshes.
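    The core of ITEM-RMSE, as described above, is a standard RMSE restricted to the texels inside the shared mask. A minimal sketch follows; the array names and the synthetic test data are assumptions, not the paper's code.

```python
import numpy as np

def masked_rmse(atlas_a, atlas_b, mask):
    """RMSE computed only over texels inside the mask, i.e. those that are
    actually used when the atlas is mapped onto the 3D mesh (cf. ITEM-RMSE)."""
    a = atlas_a[mask].astype(np.float64)
    b = atlas_b[mask].astype(np.float64)
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical usage: two adapted atlases sharing the same boolean mask.
h, w = 256, 256
mask = np.zeros((h, w), dtype=bool)
mask[32:224, 32:224] = True                       # texels covered by 2D patches
atlas_a = np.random.randint(0, 256, (h, w, 3))
atlas_b = np.clip(atlas_a + np.random.randint(-5, 6, (h, w, 3)), 0, 255)
print(masked_rmse(atlas_a, atlas_b, mask))
```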

    3D facial merging for virtual human reconstruction

    There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since the VH cannot handle concavities. On the other hand, structured light projectors make it possible to capture very accurate depth data, and thus to reconstruct realistic faces, but they are too expensive to deploy in numbers. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.

    Hierarchical modeling of 3D objects with subdivision surfaces

    SSs (Subdivision Surfaces) are a powerful paradigm for modeling 3D (three-dimensional) objects that bridges the two traditional approaches to surface approximation, based on polygonal meshes and on curved patches, each of which has its own problems. Subdivision schemes make it possible to define a (piecewise) smooth surface, like those most frequent in practice, as the limit of a recursive refinement process applied to a coarse control mesh, which can be described very compactly. Moreover, the recursion inherent in SSs naturally establishes a pyramidal nesting relationship between the successively generated meshes / LODs (Levels of Detail), so SSs lend themselves extraordinarily well to the wavelet-based MRA (Multi-Resolution Analysis) of surfaces, which has immediate and very interesting practical applications, such as the hierarchical coding and editing of 3D models. We begin by describing the links between the three areas on which our work is based (SSs, automatic LOD extraction, and wavelet-based MRA) to explain how these three pieces of the puzzle of hierarchical modeling of 3D objects with SSs fit together. Wavelet-based MRA consists of decomposing a function into a coarse version of itself plus a set of hierarchically nested additive refinements called "wavelet coefficients". Classical wavelet theory studies classical nD signals: those defined over parametric domains homeomorphic to R^n or (0,1)^n, such as audio (n=1), images (n=2) or video (n=3). In less trivial topologies, such as 2D manifolds (surfaces in 3D space), MRA is not as obvious, but it remains possible if approached from the perspective of SSs. It suffices to start from a coarse mesh that approximates the considered surface at a low LOD, subdivide it recursively and, while doing so, add the wavelet coefficients, which are the 3D details needed to obtain finer and finer approximations to the original surface. We then move on to the practical applications that constitute our main original contribution and, in particular, we present a hierarchical coding technique for 3D models based on SSs, which operates on the aforementioned 3D details: it expresses them in a local normal frame; organizes them according to a facet-based hierarchical structure; quantizes them, devoting fewer bits to their less energetic tangential components, and "scalarizes" them; and finally codes them with a technique similar to the SPIHT (Set Partitioning In Hierarchical Trees) of Said and Pearlman. The result is a fully embedded code that, for mostly smooth surfaces, is at least twice as compact as those obtained with previously published progressive 3D mesh coding techniques, in which, moreover, the LODs are not pyramidally nested. Finally, we describe several auxiliary methods that we have developed, improving previous techniques and creating our own, since a complete solution to the modeling of 3D objects with SSs requires solving two further problems. The first is the extraction of a base mesh (triangular, in our case) from the original surface, which is usually given as a fine triangle mesh with arbitrary connectivity. The second is the generation of a recursive remeshing of the original/target mesh with subdivision connectivity, through a recursive refinement of the base mesh, thereby computing the 3D details needed to correct the positions predicted by subdivision for the new vertices.
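    As a hedged sketch of the coarse-mesh-plus-details idea described above (not the thesis' actual scheme: the 1-to-4 midpoint predictor, the data structures, and the toy base mesh are assumptions), each new vertex below is placed at a subdivision prediction corrected by a stored 3D detail, i.e. the "wavelet coefficient" of that vertex:

```python
import numpy as np

def subdivide_with_details(vertices, triangles, details):
    """One 1-to-4 subdivision step: each edge midpoint is predicted by simple
    midpoint subdivision and then corrected by a stored 3D detail keyed by the
    edge. Illustrative sketch only."""
    vertices = [np.asarray(v, dtype=float) for v in vertices]
    new_tris, midpoint_of = [], {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_of:
            predicted = (vertices[i] + vertices[j]) / 2.0
            vertices.append(predicted + details.get(key, np.zeros(3)))
            midpoint_of[key] = len(vertices) - 1
        return midpoint_of[key]

    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_tris

# Hypothetical coarse base mesh: one triangle; one edge carries a 3D detail.
base_v = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
base_t = [(0, 1, 2)]
details = {(0, 1): np.array([0.0, 0.0, 0.1])}      # lifts the (0, 1) midpoint
v, t = subdivide_with_details(base_v, base_t, details)
print(len(v), "vertices,", len(t), "triangles")    # 6 vertices, 4 triangles
```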

    COLLADA + MPEG-4 or X3D + MPEG-4

    The paper is an overview of 3D graphics assets and applications standards. The authors analyzed the three main open standards dealing with three-dimensional (3-D) graphics content and applications, X3D, COLLADA, and MPEG-4, to clarify the role of each with respect to the following criteria: ability to describe only the graphics assets in a synthetic 3-D scene or also its behavior as an application, compression capacities, and appropriateness for authoring, transmission, and publishing. COLLADA could become the interchange format for authoring tools; MPEG-4 on top of it (as specified in MPEG-4 Part 25), the publishing format for graphics assets; and X3D, the standard for interactive applications, enriched by MPEG-4 compression in the case of online ones. The authors also mentioned that, in order to build a mobile application, a developer has to consider different hardware configurations and performances, different operating systems, different screen sizes, and different input controls.

    A parallel implementation of 3D Zernike moment analysis

    Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets of about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to handle the high volume of input data so that it does not become a bottleneck for the system.
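    3D Zernike moments are assembled from moments of the voxel grid; as a hedged, CPU-only illustration of that building block (not the paper's CUDA implementation, and with all names, the coordinate mapping, and the toy volume being assumptions), the sketch below computes low-order raw 3D geometric moments:

```python
import numpy as np

def geometric_moments(volume, max_order):
    """Raw 3D geometric moments m_pqr = sum_{x,y,z} x^p * y^q * z^r * f(x,y,z),
    a building block from which 3D Zernike moments are assembled.
    Plain NumPy sketch; the paper's contribution is a parallel CUDA version."""
    nx, ny, nz = volume.shape
    # Coordinates mapped into [-1, 1] so the volume fits inside the unit ball.
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    z = np.linspace(-1.0, 1.0, nz)
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            for r in range(max_order + 1 - p - q):
                weights = np.einsum("i,j,k->ijk", x ** p, y ** q, z ** r)
                moments[(p, q, r)] = float(np.sum(weights * volume))
    return moments

# Hypothetical binary voxelization, far smaller than the ~200^3 grids in the paper.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
print(geometric_moments(vol, max_order=2)[(0, 0, 0)])   # total occupied "mass"
```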

    Moving object detection strategy for augmented-reality applications in a GPGPU by using CUDA

    We propose a spatial-color-based non-parametric background-foreground modeling strategy implemented on a GPGPU using CUDA. This strategy is suitable for augmented-reality applications, providing real-time, high-quality results in a great variety of scenarios.

    Composition of texture atlases for 3D mesh multi-texturing

    We introduce an automatic technique for mapping, onto a 3D triangle mesh approximating the shape of a real 3D object, a high-resolution texture synthesized from several pictures taken simultaneously by real cameras surrounding the object. We create a texture atlas by first unwrapping the 3D mesh into a set of 2D patches with no distortion (i.e., the angles and relative sizes of the 3D triangles are preserved in the atlas), and then mixing the color information from the input images in three further steps: step 2 packs the 2D patches so that the bounding canvas of the set is as small as possible; step 3 assigns at most one triangle to each canvas pixel; finally, in step 4, the color of each pixel is calculated as a smoothly varying weighted average of the corresponding pixels from several input photographs. Our method is especially well suited to the creation of realistic 3D models without the need for graphic artists to retouch the texture. Categories and Subject Descriptors (according to ACM CCS): Computer Graphics [I.3.7]: Three-Dimensional Graphics and Realism: Color, shading, shadowing, and texture.
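    As a hedged sketch of the step-4 blending described above (the per-view weighting scheme, array names, and test data are assumptions, not the paper's), each atlas texel below gets a normalized weighted average of the colors contributed by the input photographs:

```python
import numpy as np

def blend_views(samples, weights):
    """Per-texel weighted average of the colors contributed by several photos.
    samples: (n_views, h, w, 3) colors sampled from each photo into atlas space.
    weights: (n_views, h, w) per-view, per-texel weights (e.g. derived from the
             angle between the triangle normal and each camera's view direction;
             the exact weighting used here is an assumption)."""
    w = weights[..., None]                              # broadcast over RGB
    total = w.sum(axis=0)
    total[total == 0] = 1.0                             # avoid division by zero
    return (w * samples).sum(axis=0) / total

# Hypothetical data: 3 cameras and a 128x128 atlas canvas.
samples = np.random.rand(3, 128, 128, 3)
weights = np.random.rand(3, 128, 128)
atlas = blend_views(samples, weights)
print(atlas.shape)                                      # (128, 128, 3)
```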

    Moving object detection for real-time augmented reality applications in a GPGPU

    The latest generation of consumer electronic devices is endowed with Augmented Reality (AR) tools. These tools require moving-object detection strategies, which should be fast and efficient, to carry out higher-level object analysis tasks. We propose a lightweight spatio-temporal non-parametric background-foreground modeling strategy on a General Purpose Graphics Processing Unit (GPGPU), which provides real-time, high-quality results in a great variety of scenarios and is suitable for AR applications.
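    The paper's specific strategy is not reproduced here; as a generic, hedged illustration of the model class named above (a per-pixel, sample-based non-parametric background model; the kernel, bandwidth, threshold, and all names are assumptions), a CPU-side sketch could look like this:

```python
import numpy as np

def foreground_mask(frame, background_samples, bandwidth=12.0, threshold=1e-4):
    """Generic per-pixel non-parametric background model (sketch, not the
    paper's method): each pixel keeps N past background color samples, and a
    Gaussian-kernel density estimate of the current color decides whether the
    pixel is background (high density) or foreground (low density)."""
    # background_samples: (N, h, w, 3); frame: (h, w, 3), all float arrays.
    diff = frame[None] - background_samples                     # (N, h, w, 3)
    sq_dist = (diff ** 2).sum(axis=-1)                          # (N, h, w)
    density = np.exp(-sq_dist / (2.0 * bandwidth ** 2)).mean(axis=0)
    return density < threshold                                  # True = foreground

# Hypothetical usage with random data standing in for camera frames.
h, w, n_samples = 120, 160, 20
bg = np.random.rand(n_samples, h, w, 3) * 10.0
frame = bg.mean(axis=0) + np.random.rand(h, w, 3)               # mostly background
print(foreground_mask(frame, bg).mean())                        # fraction flagged
```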